
Conversation

@Elvis339
Contributor

Add local developer tooling for triggering C-Chain reexecution benchmarks. Includes a just recipe, the gh CLI in the nix devshell, and usage documentation in METRICS.md.

Note: should be merged after #1493

Why this should be merged

Enables developers to trigger C-Chain reexecution benchmarks from their local environment without navigating the GitHub Actions UI. This complements the track-performance.yml workflow by providing a CLI interface.

How this works

The bench-cchain recipe:

  1. Validates inputs (predefined test name OR custom block params)
  2. Triggers track-performance.yml via gh workflow run
  3. Polls for workflow registration (handles GitHub's async dispatch)
  4. Watches the run until completion
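
For illustration, steps 2-4 boil down to roughly the following (a simplified sketch, not the actual recipe body; the workflow input name `test` and the polling loop are illustrative, and the real script matches the dispatched run more carefully):

```
# Trigger the workflow (input name is illustrative)
gh workflow run track-performance.yml -f test="$TEST"

# gh workflow run returns immediately, so poll until the new run is registered
run_id=""
while [ -z "$run_id" ]; do
  sleep 5
  run_id=$(gh run list --workflow=track-performance.yml --limit 1 \
    --json databaseId --jq '.[0].databaseId')
done

echo "https://github.com/ava-labs/firewood/actions/runs/$run_id"
gh run watch "$run_id"
```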

Environment variables support custom configurations:

  • FIREWOOD_REF - specific commit/tag to benchmark
  • AVALANCHEGO_REF - AvalancheGo version to test against
  • START_BLOCK, END_BLOCK, BLOCK_DIR_SRC - custom block ranges

How this was tested

nix run ./ffi#gh -- auth login

GH_TOKEN=$(gh auth token) RUNNER=avalanche-avalanchego-runner-2ti FIREWOOD_REF=v0.0.18 CONFIG=firewood START_BLOCK=1 END_BLOCK=100 BLOCK_DIR_SRC=cchain-mainnet-blocks-200-ldb just bench-cchain

Member

@rkuris left a comment


All my changes are just nits, feel free to use or ignore them and get this merged.

echo "https://github.com/ava-labs/firewood/actions/runs/$run_id"
echo ""

$GH run watch "$run_id"
Member


Is it better to use this or watch it on the web? Maybe we shouldn't run this and let them choose.

Contributor Author


Since the script already resolves and wraps the gh CLI (falling back to nix if needed), waiting by default keeps the experience consistent for users who have the tooling set up. The URL is printed before the watch starts, so users can open it in their browser and Ctrl+C to exit if they prefer the web UI.


Does that work for you, or would you prefer adding something like a no-wait flag?
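
For reference, an opt-out could be as small as gating the watch call, something like the sketch below (`NO_WATCH` is just a placeholder name, not something the script defines today):

```
echo "https://github.com/ava-labs/firewood/actions/runs/$run_id"
echo ""

# Skip the blocking watch when the caller opts out (placeholder variable name)
if [ -z "${NO_WATCH:-}" ]; then
  $GH run watch "$run_id"
fi
```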

Co-authored-by: Ron Kuris <ron.kuris@avalabs.org>
Signed-off-by: Elvis <43846394+Elvis339@users.noreply.github.com>
Elvis339 added a commit that referenced this pull request Jan 28, 2026
## Why

Track C-Chain reexecution benchmark performance over time. Catch
regressions before production.

Closes #1494

## How

Firewood → triggers AvalancheGo benchmark → downloads results →
publishes to GitHub Pages

<img width="746" height="509" alt="Screenshot 2026-01-27 at 16 12 02"
src="https://github.com/user-attachments/assets/ff372ef3-67b8-450e-8874-5289517130c8"
/>

## Changes

- `scripts/bench-cchain-reexecution.sh`
  - Trigger AvalancheGo's C-Chain reexecution benchmark
  - Poll for workflow registration
  - Wait for completion and download artifacts

- `.github/workflows/track-performance.yml`
  - Orchestrate benchmark trigger via the script
  - Publish results to GitHub Pages (main → `bench/`, branches → `dev/bench/{branch}/`)

## Usage
```
# Auth
nix run ./ffi#gh -- auth login
export GH_TOKEN=$(nix run ./ffi#gh -- auth token)

# Predefined test
./scripts/bench-cchain-reexecution.sh trigger firewood-101-250k

# With specific Firewood version
FIREWOOD_REF=v0.0.18 ./scripts/bench-cchain-reexecution.sh trigger firewood-101-250k

# Custom block range
RUNNER=avalanche-avalanchego-runner-2ti \
FIREWOOD_REF=v0.0.18 \
CONFIG=firewood \
START_BLOCK=1 \
END_BLOCK=100 \
BLOCK_DIR_SRC=cchain-mainnet-blocks-200-ldb \
./scripts/bench-cchain-reexecution.sh trigger

# Other commands
./scripts/bench-cchain-reexecution.sh tests    # list available tests
./scripts/bench-cchain-reexecution.sh list     # list recent runs
./scripts/bench-cchain-reexecution.sh status <run_id>
```
- Set `FIREWOOD_REF=v0.0.18` explicitly. Without it, the workflow builds from `HEAD`, which currently fails due to changes in the FFI layer.

### Related
AvalancheGo: ava-labs/avalanchego#4650
Local iteration PR: #1642
Followup: #1639

---------

Signed-off-by: Elvis <43846394+Elvis339@users.noreply.github.com>
Co-authored-by: rodrigo <77309055+RodrigoVillar@users.noreply.github.com>
Co-authored-by: Ron Kuris <ron.kuris@avalabs.org>
Comment on lines +349 to +352
just bench-cchain firewood-101-250k

# With specific Firewood version
FIREWOOD_REF=v0.1.0 just bench-cchain firewood-33m-40m
Contributor


I believe you mentioned that just supports named parameters. If so, could you please use named parameters wherever possible?
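
As a point of comparison, if the recipe read these from justfile variables instead of plain environment variables, they could be overridden by name at the call site, e.g. (illustrative only; this assumes the justfile defines a `firewood_ref` variable that the recipe forwards):

```
just firewood_ref=v0.1.0 bench-cchain firewood-33m-40m
```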

Comment on lines +373 to +390
| Variable | Default | Description |
|----------|---------|-------------|
| `FIREWOOD_REF` | current commit | Firewood commit/tag/branch to build |
| `AVALANCHEGO_REF` | master | AvalancheGo ref to test against |
| `LIBEVM_REF` | - | Optional libevm ref |
| `RUNNER` | avalanche-avalanchego-runner-2ti | GitHub Actions runner |
| `TIMEOUT_MINUTES` | - | Workflow timeout |
| `DOWNLOAD_DIR` | ./results | Directory for downloaded artifacts |

**Custom mode variables** (when no test specified):

| Variable | Default | Description |
|----------|---------|-------------|
| `CONFIG` | firewood | VM config (firewood, hashdb, etc.) |
| `START_BLOCK` | required | First block number |
| `END_BLOCK` | required | Last block number |
| `BLOCK_DIR_SRC` | required | S3 block directory |
| `CURRENT_STATE_DIR_SRC` | - | S3 state directory (empty = genesis run) |
Contributor


Given that this is a repetition of:

# ENVIRONMENT
# GH_TOKEN GitHub token for API access (required)
# TEST Predefined test name, alternative to arg (optional)
# FIREWOOD_REF Firewood commit/tag/branch, empty = AvalancheGo's go.mod default (optional)
# AVALANCHEGO_REF AvalancheGo ref to test against (default: master)
# RUNNER GitHub Actions runner label (default: avalanche-avalanchego-runner-2ti)
# LIBEVM_REF libevm ref (optional)
# TIMEOUT_MINUTES Workflow timeout in minutes (optional)
# DOWNLOAD_DIR Directory for downloaded artifacts (default: ./results)
#
# Custom mode (when no TEST/test arg specified):
# CONFIG VM config (default: firewood)
# START_BLOCK First block number (required)
# END_BLOCK Last block number (required)
# BLOCK_DIR_SRC S3 block directory, e.g., cchain-mainnet-blocks-200-ldb (required)
# CURRENT_STATE_DIR_SRC S3 state directory, empty = genesis run (optional)

Would it make sense to just tell clients to look at the documentation in scripts/bench-cchain-reexecution.sh for all possible options?

Contributor


One of the requirements of adding GAB to Firewood was that developers would be able to create a branch from main, push a commit, and run GAB against their new branch. From this, though, it's not evident that this is supported, and there's no documentation for this common workflow. Could we add a section walking through what running benchmarks against a branch would look like?
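
Something along these lines would probably cover it, assuming a pushed branch name is accepted for `FIREWOOD_REF` (the variable table above lists it as a commit/tag/branch); the branch and test names here are just examples:

```
# Create and push a branch (example names)
git checkout -b my-feature main
git push -u origin my-feature

# Run a predefined benchmark against the pushed branch
FIREWOOD_REF=my-feature just bench-cchain firewood-101-250k
```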


Labels

c-chain, DO NOT MERGE (This PR is not meant to be merged in its current state), testing
